Olga Tokarczuk Recommends Visionary Science Fiction

The New Yorker

The Nobel-winning author, whose newest book is out this week, discusses work by a few of her favorite writers. The Nobel Prize winner Olga Tokarczuk's fiction is known for its interest in the porosity of boundaries--between nations, between ethnicities, between fiction and reality, consciousness and dreams. As her novels and stories stage the constant flux of national borders, particularly in Eastern Europe (Tokarczuk is Polish), they also delight in supernatural and science-fictional elements. In " House of Day, House of Night," out from Riverhead this week, she writes, "All over the world, wherever people are sleeping, small, jumbled worlds are flaring up in their heads, growing over reality like scar tissue." Not long ago, Tokarczuk sent us some remarks about a few of her favorite sci-fi and speculative-fiction writers, whose books mix the fantastical and the prosaic masterfully.


Autonomous humanoid robot soccer debuts in China

FOX News

In a futuristic showdown that captured global attention, four teams of autonomous humanoid robots competed in China's first AI-powered soccer tournament. The event took place in Beijing's Yizhuang Development Zone as part of the Robo League robot football tournament, marking a significant milestone for real-world artificial intelligence competition in China. Unlike remote-controlled robot matches, this tournament featured zero human intervention. Each team had three active humanoid robots plus a substitute, playing two ten-minute halves with a five-minute break.


Superstudent intelligence in thermodynamics

Loubet, Rebecca, Zittlau, Pascal, Hoffmann, Marco, Vollmer, Luisa, Fellenz, Sophie, Leitte, Heike, Jirasek, Fabian, Lenhard, Johannes, Hasse, Hans

arXiv.org Artificial Intelligence

In this short note, we report and analyze a striking event: OpenAI's large language model o3 has outwitted all students in a university exam on thermodynamics. The thermodynamics exam is a difficult hurdle for most students, who must show that they have mastered the fundamentals of this important topic. Consequently, failure rates are very high, A-grades are rare, and an A-grade is considered proof of exceptional intellectual ability, because pattern learning does not help in the exam: the problems can only be solved by knowledgeably and creatively combining principles of thermodynamics. We gave our latest thermodynamics exam not only to the students but also to OpenAI's most powerful reasoning model, o3, and assessed its answers in exactly the same way as those of the students. In zero-shot mode, o3 solved all problems correctly, performing better than every student who took the exam; its overall score was in the range of the best scores we have seen in more than 10,000 similar exams since 1985. This is a turning point: machines now excel at complex tasks that are usually taken as proof of human intellectual capability. We discuss the consequences for the work of engineers and the education of future engineers.


Passed the Turing Test: Living in Turing Futures

Gonçalves, Bernardo

arXiv.org Artificial Intelligence

The world has seen the emergence of machines based on pretrained models, transformers, also known as generative artificial intelligences for their ability to produce various types of content, including text, images, audio, and synthetic data. Without resorting to preprogramming or special tricks, their intelligence grows as they learn from experience, and to ordinary people they can appear human-like in conversation. This means that they can pass the Turing test, and that we are now living in one of many possible Turing futures where machines can pass for what they are not. However, the learning machines that Turing imagined would pass his imitation tests were machines inspired by the natural development of the low-energy human cortex. They would be raised like human children and would naturally learn the ability to deceive an observer. These "child machines," Turing hoped, would be powerful enough to have an impact on society and nature.


Learning Machines: In Search of a Concept Oriented Language

Gunes, Veyis

arXiv.org Artificial Intelligence

What is the next step after the data/digital revolution? What do we need most to reach this goal? How can machines memorize, learn, or discover? What should they be able to do to qualify as "intelligent"? These questions concern the next generation of "intelligent" machines. Such machines should probably be able to handle knowledge discovery, decision-making, and concepts. In this paper, we consider some historical contributions and discuss these questions through an analogy to human intelligence. We also propose a general framework for a concept oriented language.


The Machine Ethics podcast: Good tech with Eleanor Drage and Kerry McInerney

AIHub

Hosted by Ben Byford, The Machine Ethics Podcast brings together interviews with academics, authors, business leaders, designers and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology's impact on society. In this episode we chat with Eleanor and Kerry about good technology and whether it's even possible; how technology is political; the watering down of regulation; the magic of AI; the value of human creativity; how feminist, Aboriginal, and mixed-race studies can help AI development; the performative nature of tech; and more. Dr Kerry McInerney (née Mackereth) is a Research Fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, where she co-leads the Global Politics of AI project on how AI is impacting international relations. She is also a Research Fellow at the AI Now Institute (a leading AI policy thinktank in New York), an AHRC/BBC New Generation Thinker (2023), one of the 100 Brilliant Women in AI Ethics (2022), and one of Computing's Rising Stars 30 (2023). Kerry is the co-editor of the collection Feminist AI: Critical Perspectives on Algorithms, Data, and Intelligent Machines (2023, Oxford University Press), the co-editor of the collection The Good Robot: Why Technology Needs Feminism (2024, Bloomsbury Academic), and the co-author of the forthcoming book Reprogram: Why Big Tech is Broken and How Feminism Can Fix It (2026, Princeton University Press). Dr Eleanor Drage is a Senior Research Fellow at the University of Cambridge Centre for the Future of Intelligence, and teaches AI professionals about AI ethics on a Masters course at Cambridge.


On Formally Undecidable Traits of Intelligent Machines

Fox, Matthew

arXiv.org Artificial Intelligence

Building on work by Alfonseca et al. (2021), we study the conditions necessary for it to be logically possible to prove that an arbitrary artificially intelligent machine will exhibit certain behavior. To do this, we develop a formalism like -- but mathematically distinct from -- the theory of formal languages and their properties. Our formalism affords a precise means for not only talking about the traits we desire of machines (such as them being intelligent, contained, moral, and so forth), but also for detailing the conditions necessary for it to be logically possible to decide whether a given arbitrary machine possesses such a trait or not. Contrary to Alfonseca et al.'s (2021) results, we find that Rice's theorem from computability theory cannot in general be used to determine whether an arbitrary machine possesses a given trait or not. Therefore, it is not necessarily the case that deciding whether an arbitrary machine is intelligent, contained, moral, and so forth is logically impossible.
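For background (stated here in its standard textbook form, not taken from the paper), the classical obstacle the authors revisit is Rice's theorem, which says that no non-trivial semantic property of programs is decidable:

```latex
\textbf{Rice's theorem (standard form).} Let $P$ be a property of the
partial computable functions such that some computable function has $P$
and some does not (i.e., $P$ is non-trivial). Then the index set
\[
  I_P = \{\, e \mid \varphi_e \text{ has property } P \,\}
\]
is undecidable.
```

The abstract's claim can then be read as: traits like "intelligent" or "moral" need not be non-trivial *semantic* properties in this sense, so the theorem does not automatically rule out deciding them.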


Intelligent machines work in unstructured environments by differential neuromorphic computing

Wang, Shengbo, Gao, Shuo, Tang, Chenyu, Occhipinti, Edoardo, Li, Cong, Wang, Shurui, Wang, Jiaqi, Zhao, Hubin, Hu, Guohua, Nathan, Arokia, Dahiya, Ravinder, Occhipinti, Luigi

arXiv.org Artificial Intelligence

Efficient operation of intelligent machines in the real world requires methods that allow them to understand and predict the uncertainties presented by unstructured environments with good accuracy, scalability and generalization, similar to humans. Current methods rely on pretrained networks instead of continuously learning from the dynamic signal properties of working environments, and they suffer inherent limitations such as data-hungry training procedures and limited generalization capabilities. Herein, we present a memristor-based differential neuromorphic computing, perceptual signal processing and learning method for intelligent machines. The main features of environmental information, such as amplification (>720%) and adaptation (<50%) of mechanical stimuli, are encoded in memristors and extracted to obtain human-like processing in unstructured environments. The developed method takes advantage of the intrinsic multi-state property of memristors and exhibits good scalability and generalization, as confirmed by validation in two different application scenarios: object grasping and autonomous driving. In the former, a robot hand experimentally realizes safe and stable grasping through fast learning (in ~1 ms) of unknown object features (e.g., a sharp corner or a smooth surface) with a single memristor. In the latter, the decision-making information of 10 unstructured environments in autonomous driving (e.g., overtaking cars, pedestrians) is accurately (94%) extracted with a 40x25 memristor array. By mimicking the intrinsic nature of human low-level perception mechanisms, the electronic memristive neuromorphic circuit-based method presented here shows the potential to adapt to diverse sensing technologies and to help intelligent machines generate smart high-level decisions in the real world.


Bayes in the age of intelligent machines

Griffiths, Thomas L., Zhu, Jian-Qiao, Grant, Erin, McCoy, R. Thomas

arXiv.org Artificial Intelligence

The success of methods based on artificial neural networks in creating intelligent machines seems like it might pose a challenge to explanations of human cognition in terms of Bayesian inference. We argue that this is not the case, and that in fact these systems offer new opportunities for Bayesian modeling. Specifically, we argue that Bayesian models of cognition and artificial neural networks lie at different levels of analysis and are complementary modeling approaches, together offering a way to understand human cognition that spans these levels. We also argue that the same perspective can be applied to intelligent machines, where a Bayesian approach may be uniquely valuable in understanding the behavior of large, opaque artificial neural networks that are trained on proprietary data.
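As a toy illustration of the last point (my own sketch, not from the paper): an opaque model can be treated as a black box that answers correctly with some unknown probability theta, and a grid-based Bayesian update over observed answers then characterizes its behavior without any access to its internals.

```python
# Hedged sketch: grid-approximation Bayesian inference over an opaque
# model's unknown success rate theta, starting from a uniform prior.
# The function name `posterior` and the 9-of-10 example are illustrative.

def posterior(correct, total, steps=100):
    """Posterior over theta given `correct` successes in `total` trials."""
    grid = [i / steps for i in range(steps + 1)]
    # Binomial likelihood (the constant factor cancels on normalization).
    like = [t**correct * (1 - t)**(total - correct) for t in grid]
    z = sum(like)
    return grid, [l / z for l in like]

grid, post = posterior(correct=9, total=10)
mode = grid[post.index(max(post))]
print(mode)  # → 0.9
```

The same update works regardless of what produced the answers, which is the sense in which a Bayesian description can sit at a different level of analysis from the network itself.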


Design of the Artificial: lessons from the biological roots of general intelligence

Dehghani, Nima

arXiv.org Artificial Intelligence

Our fascination with intelligent machines goes back to ancient times, with the mythical automaton Talos, Aristotle's mode of mechanical thought (the syllogism), and Heron of Alexandria's mechanical machines. However, the quest for Artificial General Intelligence (AGI) has been plagued by repeated failures. Recently, there has been a shift towards bio-inspired software and hardware, but their singular design focus makes them inefficient at achieving AGI. Which set of requirements has to be met in the design of AGI? What are the limits of the design of the artificial? A careful examination of computation in biological systems suggests that evolutionary tinkering with contextual processing of information, enabled by a hierarchical architecture, is key to building AGI.